BayOTIDE: Bayesian Online Multivariate Time series Imputation with functional decomposition
In real-world scenarios like traffic and energy, massive time-series data
with missing values and noise are widely observed, and are often sampled
irregularly. While many imputation methods have been proposed, most of them
work with a local horizon: models are trained by splitting the long sequence
into batches of fixed-size patches. This local horizon can cause models to
ignore global trends and periodic patterns. More importantly, almost all
methods assume the observations are sampled at regular timestamps and fail to
handle the complex, irregularly sampled time series that arise in different
applications. Finally, most existing methods are learned in an offline manner,
making them unsuitable for applications with fast-arriving streaming data.
To overcome
these limitations, we propose BayOTIDE: Bayesian Online Multivariate Time
series Imputation with functional decomposition. We treat the multivariate
time series as a weighted combination of groups of low-rank temporal factors
with different patterns. We apply a group of Gaussian processes (GPs) with
different kernels as functional priors to fit the factors. For computational
efficiency, we further convert the GPs into a state-space prior by
constructing an equivalent stochastic differential equation (SDE), and we
develop a scalable algorithm for online inference. The proposed method can not
only handle imputation over arbitrary timestamps but also offer uncertainty
quantification and interpretability for downstream applications. We evaluate
our method on both synthetic and real-world datasets.
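The GP-to-SDE conversion described above can be illustrated in its simplest case: a Matérn-1/2 (Ornstein-Uhlenbeck) prior has an exact scalar state-space form, so online inference over irregular timestamps reduces to a Kalman filter. The sketch below is a hypothetical single-factor illustration, not the paper's algorithm; the function name, hyperparameter values, and the use of `None` to mark missing observations are assumptions.

```python
import numpy as np

def kalman_impute(ts, ys, ell=1.0, s2=1.0, noise=0.1):
    """Online imputation under a Matern-1/2 GP prior via its SDE form.

    The prior dx(t) = -(1/ell) x(t) dt + sqrt(2*s2/ell) dW(t) gives the
    discrete transition x_{k+1} = a x_k + q_k with a = exp(-dt/ell) and
    process variance s2*(1 - a^2), valid for arbitrary gaps dt.
    ys entries equal to None mark missing values (predict-only steps).
    Returns posterior means and variances at every timestamp.
    """
    m, P = 0.0, s2                      # start from the stationary prior
    means, vars_ = [], []
    t_prev = ts[0]
    for t, y in zip(ts, ys):
        dt = t - t_prev
        a = np.exp(-dt / ell)
        m, P = a * m, a * a * P + s2 * (1 - a * a)   # predict step
        if y is not None:                            # update step, skipped if missing
            k = P / (P + noise)
            m, P = m + k * (y - m), (1 - k) * P
        means.append(m)
        vars_.append(P)
        t_prev = t
    return np.array(means), np.array(vars_)

# Irregular timestamps with a gap of missing values in the middle.
ts = [0.0, 0.3, 0.9, 1.0, 2.5]
ys = [1.0, 0.8, None, None, -0.2]
m, v = kalman_impute(ts, ys)
```

Because the filter carries a full posterior, the returned variances grow across the missing stretch and shrink at observations, which is the uncertainty quantification the abstract refers to.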
A Kernel Approach for PDE Discovery and Operator Learning
This article presents a three-step framework for learning and solving partial
differential equations (PDEs) using kernel methods. Given a training set
consisting of pairs of noisy PDE solutions and source/boundary terms on a mesh,
kernel smoothing is utilized to denoise the data and approximate derivatives of
the solution. This information is then used in a kernel regression model to
learn the algebraic form of the PDE. The learned PDE is then used within a
kernel-based solver to approximate the solution of the PDE with a new
source/boundary term, thereby constituting an operator learning framework.
Numerical experiments compare the method to state-of-the-art algorithms and
demonstrate its competitive performance.
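The first step of the framework, kernel smoothing to denoise the data and approximate derivatives, can be sketched with kernel ridge regression and the analytic derivative of an RBF kernel. This is a hypothetical one-dimensional illustration under assumed kernel, lengthscale, and regularization choices, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
xs = np.linspace(0, 2 * np.pi, 80)
u_noisy = np.sin(xs) + 0.05 * rng.normal(size=xs.size)  # noisy solution samples

ell, lam = 0.5, 1e-2                                     # assumed hyperparameters

def k(a, b):
    """RBF kernel matrix between point sets a and b."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))

# Kernel ridge fit: smooth interpolant f(x) = sum_i alpha_i k(x, x_i).
K = k(xs, xs)
alpha = np.linalg.solve(K + lam * np.eye(xs.size), u_noisy)

# Query away from the boundary, where the smoother is most reliable.
xq = np.linspace(0.5, 2 * np.pi - 0.5, 50)
u_hat = k(xq, xs) @ alpha                                # denoised solution
# Differentiating the interpolant analytically gives the derivative estimate:
# d/dx k(x, xi) = -(x - xi)/ell^2 * k(x, xi).
du_hat = (-(xq[:, None] - xs[None, :]) / ell ** 2 * k(xq, xs)) @ alpha
```

The denoised values `u_hat` and derivative estimates `du_hat` are exactly the kind of features that the second step would feed into a kernel regression to learn the algebraic form of the PDE.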
A Metalearning Approach for Physics-Informed Neural Networks (PINNs): Application to Parameterized PDEs
Physics-informed neural networks (PINNs) as a means of discretizing partial
differential equations (PDEs) are garnering much attention in the Computational
Science and Engineering (CS&E) world. At least two challenges exist for PINNs
at present: an understanding of accuracy and convergence characteristics with
respect to tunable parameters and identification of optimization strategies
that make PINNs as efficient as other computational science tools. The cost of
training PINNs remains a major challenge for physics-informed machine learning
(PiML), and, in fact, for machine learning (ML) in general. This paper moves
toward addressing the latter through the study of PINNs on new tasks, for
which parameterized PDEs provide a good testbed application, as tasks can be
easily defined in this context. Following practice in the ML world, we introduce
metalearning of PINNs with application to parameterized PDEs. By introducing
metalearning and transfer learning concepts, we can greatly accelerate the
PINNs optimization process. We present a survey of model-agnostic metalearning,
and then discuss our model-aware metalearning applied to PINNs as well as
implementation considerations and algorithmic complexity. We then test our
approach on various canonical forward parameterized PDEs that have been
presented in the emerging PINNs literature.
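The benefit of warm-starting across parameterized tasks can be mimicked on a toy problem: each parameter value mu defines a least-squares "residual" loss, and transfer is simulated by initializing gradient descent on a new task from the previous task's solution. This is a hypothetical sketch, not the paper's model-aware metalearning for PINNs; the loss, problem sizes, and step size are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 10))          # fixed "operator"; mu only shifts the target

def solve(mu, w0, lr=0.01, tol=1e-6, max_steps=10_000):
    """Gradient descent on ||A w - b(mu)||^2, counting iterations to tolerance."""
    b = A @ (mu * np.ones(10))         # task-dependent right-hand side
    w, steps = w0.copy(), 0
    grad = A.T @ (A @ w - b)
    while np.linalg.norm(grad) > tol and steps < max_steps:
        w -= lr * grad
        grad = A.T @ (A @ w - b)
        steps += 1
    return w, steps

w_cold, n_cold = solve(1.0, np.zeros(10))        # first task, trained from scratch
w_warm, n_warm = solve(1.1, w_cold)              # nearby task, warm-started (transfer)
w_scratch, n_scratch = solve(1.1, np.zeros(10))  # same nearby task, from scratch
```

Because the mu = 1.1 optimum lies close to the mu = 1.0 one, the warm-started run reaches the tolerance in noticeably fewer iterations than training from scratch; this is the effect transfer and metalearning exploit when moving between nearby parameterized PDE tasks.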